In online advertising, a set of candidate advertisements can be ranked by an auction system, and the Top-1 ad is typically selected and displayed in the ad slot. In this paper, we show that a selection bias problem exists in such auction systems. We analyze how selection bias destroys the truthfulness of the auction, which means that buyers (advertisers) in the auction cannot maximize their profit. Although selection bias is well known in the field of statistics and has been studied extensively, our main contribution is to combine a theoretical analysis of the bias with the auction mechanism. In experiments using online A/B testing, we evaluate selection bias in an auction system whose ranking score is a function of the predicted CTR (click-through rate) of each ad. The experiments show that selection bias is greatly reduced by using multi-task learning, which learns from the data of all ads.
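As a minimal illustration of the selection effect (our own sketch, not the paper's model), suppose all ads share one true CTR and the system ranks by noisy CTR predictions. Picking the Top-1 ad by predicted CTR then systematically overstates the winner's value, which is the bias at issue:

```python
import numpy as np

def selection_bias_demo(n_ads=10, true_ctr=0.05, noise=0.01,
                        trials=100_000, seed=0):
    """All ads share one true CTR; predictions add zero-mean noise.
    Returns the mean predicted CTR of the Top-1 ad and the true CTR."""
    rng = np.random.default_rng(seed)
    preds = true_ctr + rng.normal(0.0, noise, size=(trials, n_ads))
    winner_pred = preds.max(axis=1)   # Top-1 selection by predicted CTR
    return winner_pred.mean(), true_ctr

winner_mean, truth = selection_bias_demo()
# Selecting the maximum of noisy estimates overstates the winner's CTR,
# even though every ad's true CTR is identical.
```

With ten ads and Gaussian noise, the winner's predicted CTR exceeds the true CTR by roughly 1.5 noise standard deviations, which is why a bid computed from that score is no longer truthful.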
Diagnostic radiologists need artificial intelligence (AI) for medical imaging, but access to medical images required for training in AI has become increasingly restrictive. To release and use medical images, we need an algorithm that can simultaneously protect privacy and preserve pathologies in medical images. To develop such an algorithm, here, we propose DP-GLOW, a hybrid of a local differential privacy (LDP) algorithm and one of the flow-based deep generative models (GLOW). By applying a GLOW model, we disentangle the pixelwise correlation of images, which makes it difficult to protect privacy with straightforward LDP algorithms for images. Specifically, we map images onto the latent vector of the GLOW model, each element of which follows an independent normal distribution, and we apply the Laplace mechanism to the latent vector. Moreover, we applied DP-GLOW to chest X-ray images to generate LDP images while preserving pathologies.
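The Laplace mechanism applied elementwise to the latent vector is standard; the sketch below shows only that step, assuming (as the abstract states) that the GLOW latent elements are independent, so per-element noise with scale sensitivity/epsilon suffices:

```python
import numpy as np

def laplace_mechanism(z, epsilon, sensitivity=1.0, seed=None):
    """Add i.i.d. Laplace noise with scale sensitivity / epsilon to each
    element of the latent vector z; smaller epsilon means stronger
    privacy and more noise."""
    rng = np.random.default_rng(seed)
    return z + rng.laplace(0.0, sensitivity / epsilon, size=z.shape)

z = np.zeros(8)                      # stand-in for a GLOW latent vector
z_private = laplace_mechanism(z, epsilon=1.0, seed=0)
```

In DP-GLOW the noised latent would then be decoded back to image space by the inverse flow; that decoding step is omitted here.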
Edema is a common symptom of kidney disease, and quantitative measurement of edema is desired. This paper presents a method to estimate the degree of edema from facial images taken before and after dialysis of renal failure patients. As tasks to estimate the degree of edema, we perform pre- and post-dialysis classification and body weight prediction. We develop a multi-patient pre-training framework for acquiring knowledge of edema and transfer the pre-trained model to a model for each patient. For effective pre-training, we propose a novel contrastive representation learning method, called weight-aware supervised momentum contrast (WeightSupMoCo). WeightSupMoCo aims to make the feature representations of facial images closer when patient weights are similar, provided the pre- and post-dialysis labels are the same. Experimental results show that our pre-training approach improves the accuracy of pre- and post-dialysis classification by 15.1% and reduces the mean absolute error of weight prediction by 0.243 kg compared with training from scratch. The proposed method accurately estimates the degree of edema from facial images; our edema estimation system could thus be beneficial to dialysis patients.
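A weight-aware supervised contrastive loss of the kind described could look like the sketch below. This is our reconstruction, not the paper's definition: same-label pairs act as positives, softly reweighted by a Gaussian kernel on the body-weight gap; the kernel form and the `sigma` scale are assumptions.

```python
import numpy as np

def weight_sup_loss(feats, labels, weights, tau=0.1, sigma=5.0):
    """Sketch of a weight-aware supervised contrastive loss: same-label
    samples are positives, weighted by a Gaussian kernel on the
    body-weight difference (kernel and sigma are illustrative)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau
    np.fill_diagonal(sim, -1e9)             # exclude self-similarity
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    kern = np.exp(-((weights[:, None] - weights[None, :]) ** 2)
                  / (2.0 * sigma ** 2))
    w = np.where(same, kern, 0.0)           # weight-aware positive mask
    loss = -(w * logp).sum(axis=1) / w.sum(axis=1)
    return float(loss.mean())

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))             # toy feature embeddings
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1]) # pre/post-dialysis labels
weights = rng.uniform(50.0, 70.0, size=8)   # body weights in kg
loss = weight_sup_loss(feats, labels, weights)
```

The momentum-encoder queue of MoCo is omitted; this shows only how weight similarity can modulate which positives dominate the contrastive objective.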
Peripheral blood oxygen saturation (SpO2), an indicator of oxygen levels in the blood, is one of the most important physiological parameters. Although SpO2 is usually measured using a pulse oximeter, non-contact SpO2 estimation methods from facial or hand videos have been attracting attention in recent years. In this paper, we propose an SpO2 estimation method from facial videos based on convolutional neural networks (CNN). Our method constructs CNN models that consider the direct current (DC) and alternating current (AC) components extracted from the RGB signals of facial videos, which are important in the principle of SpO2 estimation. Specifically, we extract the DC and AC components from the spatio-temporal map using filtering processes and train CNN models to predict SpO2 from these components. We also propose an end-to-end model that predicts SpO2 directly from the spatio-temporal map by extracting the DC and AC components via convolutional layers. Experiments using facial videos and SpO2 data from 50 subjects demonstrate that the proposed method achieves a better estimation performance than current state-of-the-art SpO2 estimation methods.
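The DC/AC decomposition underlying SpO2 estimation can be sketched as follows. This is the classical ratio-of-ratios principle the abstract refers to, not the paper's CNN; the calibration constants `a`, `b` are illustrative placeholders, and real methods use calibrated wavelength pairs:

```python
import numpy as np

def dc_ac(signal, win=30):
    """Split a temporal signal into DC (moving average) and AC (residual)."""
    dc = np.convolve(signal, np.ones(win) / win, mode="same")
    return dc, signal - dc

def ratio_of_ratios_spo2(red, blue, win=30):
    """Classical principle: R = (AC_r/DC_r) / (AC_b/DC_b), then a linear
    calibration SpO2 ~ a - b * R (a, b are placeholder constants)."""
    dc_r, ac_r = dc_ac(red, win)
    dc_b, ac_b = dc_ac(blue, win)
    s = slice(win, -win)                 # drop filter edge effects
    R = (ac_r[s].std() / dc_r[s].mean()) / (ac_b[s].std() / dc_b[s].mean())
    a, b = 100.0, 5.0
    return a - b * R

t = np.arange(300)                       # synthetic pulse signals
red = 1.0 + 0.01 * np.sin(2 * np.pi * t / 30)
blue = 1.0 + 0.02 * np.sin(2 * np.pi * t / 30)
spo2 = ratio_of_ratios_spo2(red, blue)
```

The proposed CNN models replace the hand-crafted ratio and calibration with learned mappings, but operate on the same DC and AC components extracted from the spatio-temporal map.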
Transparency of machine learning models used for decision support in various industries is becoming essential for ensuring their ethical use. To that end, feature attribution methods such as SHAP (SHapley Additive exPlanations) are widely used to explain the predictions of black-box machine learning models to customers and developers. However, a parallel trend has been to train machine learning models in collaboration with other data holders without accessing their data. Such models, trained over horizontally or vertically partitioned data, present a challenge for explainable AI because the explaining party may have a biased view of the background data or a partial view of the feature space. As a result, explanations obtained from different participants of distributed machine learning might not be consistent with one another, undermining trust in the product. This paper presents an Explainable Data Collaboration Framework based on a model-agnostic additive feature attribution algorithm (KernelSHAP) and the Data Collaboration method of privacy-preserving distributed machine learning. In particular, we present three algorithms for different scenarios of explainability in Data Collaboration and verify their consistency with experiments on open-access datasets. Our results demonstrate a significant decrease (by at least a factor of 1.75) in feature attribution discrepancies among the users of distributed machine learning.
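A discrepancy measure of the kind being reduced could be as simple as the mean pairwise distance between per-participant attribution vectors. The metric below is our own illustrative choice (mean L1), not necessarily the one used in the paper:

```python
import numpy as np

def attribution_discrepancy(attributions):
    """Mean pairwise L1 distance between per-participant attribution
    vectors; `attributions` is (participants, features) of SHAP-style
    values. A consistent framework drives this toward zero."""
    a = np.asarray(attributions, dtype=float)
    n = len(a)
    dists = [np.abs(a[i] - a[j]).mean()
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

identical = attribution_discrepancy([[1.0, 2.0], [1.0, 2.0]])
disagree = attribution_discrepancy([[1.0, 0.0], [0.0, 1.0]])
```

Identical explanations yield zero discrepancy; the reported factor-of-1.75 improvement would correspond to this kind of quantity shrinking across participants.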
Our team, Hibikino-Musashi@Home (HMA for short), was founded in 2010 and is based in the Kitakyushu Science and Research Park, Japan. We have participated in the open platform league of the RoboCup@Home Japan Open competition every year since 2010, and we also participated in RoboCup 2017 Nagoya as both an open platform league team and a domestic standard platform league team. Currently, the Hibikino-Musashi@Home team has 20 members from seven different laboratories based at the Kyushu Institute of Technology. In this paper, we introduce our team's activities and the technologies we use.
We propose "factor matting", an alternative formulation of the video matting problem in terms of counterfactual video synthesis that is better suited for re-composition tasks. The goal of factor matting is to separate the contents of video into independent components, each visualizing a counterfactual version of the scene where contents of other components have been removed. We show that factor matting maps well to a more general Bayesian framing of the matting problem that accounts for complex conditional interactions between layers. Based on this observation, we present a method for solving the factor matting problem that produces useful decompositions even for video with complex cross-layer interactions like splashes, shadows, and reflections. Our method is trained per-video and requires neither pre-training on external large datasets, nor knowledge about the 3D structure of the scene. We conduct extensive experiments, and show that our method not only can disentangle scenes with complex interactions, but also outperforms top methods on existing tasks such as classical video matting and background subtraction. In addition, we demonstrate the benefits of our approach on a range of downstream tasks. Please refer to our project webpage for more details: https://factormatte.github.io
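For context, the classical matting model that factor matting generalizes can be written in a few lines. This sketch states only the standard compositing equation; factor matting instead seeks components that each depict a counterfactual scene with the other components removed:

```python
import numpy as np

def composite(fg, alpha, bg):
    """Classical matting equation I = alpha * F + (1 - alpha) * B,
    applied per pixel with alpha broadcast over color channels."""
    a = alpha[..., None]
    return a * fg + (1.0 - a) * bg

fg = np.ones((4, 4, 3))                 # white foreground layer
bg = np.zeros((4, 4, 3))                # black background layer
alpha = np.full((4, 4), 0.25)           # partial foreground opacity
img = composite(fg, alpha, bg)
```

Effects like splashes, shadows, and reflections violate the independence this linear model assumes between layers, which is what motivates the more general Bayesian framing described above.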
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). Moreover, AEs have adversarial transferability, which means that AEs generated for a source model can fool another black-box model (target model) with a non-trivial probability. In this paper, we investigate the properties of adversarial transferability between models, including ConvMixer, for the first time. To objectively verify the transferability properties, the robustness of the models is evaluated with a benchmark attack method called AutoAttack. In image classification experiments, ConvMixer is confirmed to be vulnerable to adversarial transferability.
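Transferability can be measured by crafting AEs on a source model and counting how often they flip a target model's predictions. The toy sketch below uses FGSM on two linear logistic models (our own illustration; the paper evaluates deep models with AutoAttack):

```python
import numpy as np

def fgsm_linear(X, y, w, b, eps):
    """FGSM for p = sigmoid(Xw + b): perturb each input along the sign
    of the input gradient of the cross-entropy loss."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = (p - y)[:, None] * w          # dL/dx for logistic regression
    return X + eps * np.sign(grad)

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# Source and target models (hypothetical weights for illustration).
w_src, w_tgt, b = np.array([1.0, 1.0]), np.array([0.8, 1.2]), 0.0
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = predict(X, w_src, b)                 # source predictions as labels
X_adv = fgsm_linear(X, y, w_src, b, eps=2.0)
# Transferability: fraction of AEs crafted on the source model that
# also change the black-box target model's prediction.
transfer = np.mean(predict(X_adv, w_tgt, b) != predict(X, w_tgt, b))
```

Because the two models' decision boundaries are similar, most AEs transfer; a robust target architecture would drive this rate down.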
In this paper, we propose an attack method against scrambled face images, in particular images protected by Encryption-then-Compression (EtC), by utilizing existing powerful StyleGAN encoders and decoders for the first time. Instead of reconstructing the identical image from an encrypted image, we focus on recovering styles that can reveal identifiable information from the encrypted image. The proposed method trains an encoder by using pairs of plain and encrypted images with a specific training strategy. Whereas state-of-the-art attack methods cannot recover any perceptual information from EtC images, the proposed method discloses personally identifiable information such as hair color, skin color, eyeglasses, gender, and so on. The results show that the reconstructed images have some perceptual similarity to the plain images.
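To make the attack target concrete, one core step of EtC-style protection is block scrambling with a secret key. The sketch below shows block permutation only; real EtC schemes also rotate, invert, and negate blocks, so this is a simplified illustration:

```python
import numpy as np

def block_scramble(img, block=4, seed=0):
    """Minimal sketch of one EtC step: permute fixed-size image blocks
    using a secret key (here, the RNG seed). Pixel values are preserved;
    only block positions change."""
    h, w = img.shape[:2]
    bh, bw = h // block, w // block
    blocks = [img[i*block:(i+1)*block, j*block:(j+1)*block]
              for i in range(bh) for j in range(bw)]
    perm = np.random.default_rng(seed).permutation(len(blocks))
    out = np.zeros_like(img)
    for k, p in enumerate(perm):
        i, j = divmod(k, bw)
        out[i*block:(i+1)*block, j*block:(j+1)*block] = blocks[p]
    return out

img = np.random.default_rng(1).integers(0, 256, size=(16, 16))
enc = block_scramble(img, block=4, seed=0)
```

Because the pixel distribution within each block survives scrambling, style-level cues such as hair and skin color remain in the encrypted image, which is what the proposed StyleGAN-based attack exploits.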
There is growing interest in using multimodal data in various web applications, such as digital advertising and e-commerce. Typical methods for extracting important information from multimodal data rely on mid-fusion architectures that combine the feature representations from multiple encoders. However, as the number of modalities increases, several potential problems with the mid-fusion model structure arise, such as an increase in the dimensionality of the concatenated multimodal features and missing modalities. To address these issues, we propose a new concept that treats multimodal inputs as a set of sequences, namely Deep Multimodal Sequence Sets (DM$^2$S$^2$). Our set-aware concept consists of three components that capture the relationships among multiple modalities: (a) a BERT-based encoder that handles the inter- and intra-order of elements in the sequences, (b) intra-modality residual attention (IntraMRA) to capture the importance of elements within a modality, and (c) inter-modality residual attention (InterMRA) to further enhance the importance of elements with modality-level granularity. Our concept exhibits performance comparable to or better than previous set-aware models. Furthermore, we demonstrate that visualization of the learned InterMRA and IntraMRA weights can provide an interpretation of the prediction results.
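A generic residual-attention gate of the kind IntraMRA and InterMRA describe can be sketched as follows. This is our own simplified parameterization (a single learned scoring vector); the exact formulation in DM$^2$S$^2$ may differ. IntraMRA would apply such a gate over elements within one modality, InterMRA over modality-level summaries:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def residual_attention(X, w_score):
    """Score each element, normalize the scores across elements, and
    reweight the input with a residual connection, so important
    elements are amplified without discarding the original features."""
    scores = softmax(X @ w_score)           # per-element importance
    return X + scores[:, None] * X          # residual reweighting

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # 5 elements of one modality
w_score = rng.normal(size=8)                # hypothetical scoring vector
out = residual_attention(X, w_score)
```

The residual form means every element is scaled by at least 1, which keeps the representation intact while the normalized scores remain directly inspectable, matching the interpretability claim above.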